Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study
Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically "learn" models from sample system executions, and have shown that the learned models can sometimes be useful. Many questions remain open, however. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would an analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.

Comment: 15 pages, plus 2 reference pages; accepted by FASE 2017 in ETAPS
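As a rough illustration of the learning setting (a generic sketch, not the paper's algorithm): learning a probabilistic model from sample executions can be cast as estimating a discrete-time Markov chain from observed traces, where a smoothing pseudo-count stands in for the degree of generalization beyond the samples.

```python
from collections import defaultdict

def learn_markov_chain(traces, smoothing=1.0):
    """Estimate transition probabilities of a Markov chain from traces.

    `smoothing` is a Laplace-style pseudo-count: larger values
    generalize more aggressively beyond the observed samples.
    """
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for trace in traces:
        states.update(trace)
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1.0
    model = {}
    for s in states:
        total = sum(counts[s].values()) + smoothing * len(states)
        model[s] = {t: (counts[s].get(t, 0.0) + smoothing) / total
                    for t in states}
    return model

# Two toy traces over hypothetical states "init", "work", "done".
traces = [["init", "work", "done"], ["init", "work", "work", "done"]]
model = learn_markov_chain(traces, smoothing=0.5)
```

With more smoothing, unseen transitions receive more probability mass; with `smoothing` near zero, the model stays close to the raw sample frequencies. Controlling this trade-off is exactly the kind of generalization question the abstract raises.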
Exploring behaviors of stochastic differential equation models of biological systems using change of measures
Stochastic Differential Equations (SDEs) are often used to model the stochastic dynamics of biological systems. Unfortunately, rare but biologically interesting behaviors (e.g., oncogenesis) can be difficult to observe in stochastic models. Consequently, analyzing the behaviors of SDE models using numerical simulations can be challenging. We introduce a method for solving the following problem: given an SDE model and a high-level behavioral specification about the dynamics of the model, algorithmically decide whether the model satisfies the specification. While there are a number of techniques for addressing this problem for discrete-state stochastic models, the analysis of SDEs and other continuous-state models has received less attention. Our proposed solution uses a combination of Bayesian sequential hypothesis testing, non-identically distributed samples, and Girsanov's theorem for change of measures to examine rare behaviors. We use our algorithm to analyze two SDE models of tumor dynamics. Our use of non-identically distributed samples contributes to the state of the art in statistical verification and model checking of stochastic models by providing an effective means for exposing rare events in SDEs, while retaining the ability to compute bounds on the probability that those events occur.
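For intuition, one standard textbook form of the change-of-measure idea (sign conventions vary, and this is not necessarily the paper's exact formulation): sample from a biased SDE under a measure Q whose extra drift makes the rare behavior common, then reweight each sample by the Radon-Nikodym derivative so the estimated probability is still with respect to the original measure P.

```latex
% Original SDE under P:  dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t.
% Under Q, the drift is tilted by u_t and
% \widetilde{W}_t = W_t - \int_0^t u_s\,ds is a Q-Brownian motion.
\frac{dP}{dQ}\Big|_{\mathcal{F}_T}
  = \exp\!\Big(-\int_0^T u_t\,d\widetilde{W}_t
               - \tfrac{1}{2}\int_0^T u_t^2\,dt\Big),
\qquad
\Pr_P[\varphi] \;=\; \mathbb{E}_Q\!\Big[\mathbf{1}_{\varphi}\,\frac{dP}{dQ}\Big].
```

Because rare-event runs occur far more often under Q, far fewer simulations are needed, while the likelihood-ratio weight keeps the estimate unbiased with respect to P.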
Exploring Design Alternatives for RAMP Transactions through Statistical Model Checking
In this paper we explore and extend the design space of the recent RAMP (Read Atomic Multi-Partition) transaction system for large-scale partitioned data stores. Arriving at a mature distributed system design through implementation and experimental validation is a labor-intensive task, so only a limited number of design alternatives can be explored in practice. The developers of RAMP implemented and validated three design alternatives for RAMP, and sketched three additional designs. This work addresses two questions: (1) How can the design space of a distributed transaction system such as RAMP be explored with modest effort, so that substantial knowledge about design alternatives can be gained before designs are implemented? (2) How realistic and informative are the results of such design explorations? We answer the first question by: (i) formally modeling eight RAMP-like designs (five by the RAMP developers and three of our own) in Maude as probabilistic rewrite theories, and (ii) using statistical model checking of those models to analyze key performance metrics such as throughput, average latency, and degrees of strong consistency and read atomicity. We answer the second question by showing that our quantitative analyses: (i) are consistent with the experimental results obtained by the RAMP developers for their implemented designs; (ii) confirm the conjectures made by the RAMP developers for their three unimplemented designs; and (iii) uncover some promising new designs that seem attractive for some applications.
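A minimal sketch of the statistical-model-checking idea behind such analyses (generic Monte Carlo estimation with a Chernoff-Hoeffding sample bound; the actual analyses use Maude-based tooling, which this does not reproduce):

```python
import math
import random

def smc_estimate(simulate_and_check, epsilon=0.05, delta=0.05, seed=0):
    """Estimate the probability that a property holds on a random run.

    By the Chernoff-Hoeffding bound, n >= ln(2/delta) / (2 * epsilon^2)
    samples suffice for the estimate to lie within `epsilon` of the true
    probability with confidence at least 1 - `delta`.
    """
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    rng = random.Random(seed)
    successes = sum(simulate_and_check(rng) for _ in range(n))
    return successes / n, n

# Toy stand-in "system": a run satisfies the property with probability 0.3.
estimate, samples = smc_estimate(lambda rng: rng.random() < 0.3)
```

In a real study, `simulate_and_check` would execute one simulation of the formal model and evaluate the property (e.g., "this transaction batch was read-atomic") on the resulting run; metrics like throughput are estimated analogously as sample means.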
The quantitative verification benchmark set
A great publication
Sequential schemes for frequentist estimation of properties in statistical model checking
National Research Foundation (NRF), Singapore
Formal Modeling and Analysis of the Walter Transactional Data Store
Walter is a distributed, partially replicated data store providing Parallel Snapshot Isolation (PSI), an important consistency property that offers attractive performance while ensuring adequate guarantees for certain kinds of applications. In this work we formally model Walter's design in Maude, and formally specify and verify PSI by model checking. To also analyze Walter's performance, we extend the Maude specification of Walter to a probabilistic rewrite theory and perform statistical model checking to evaluate Walter's throughput for a wide range of workloads. Our performance results are consistent with a previous experimental evaluation and shed new light on Walter's performance for workloads not evaluated before.